101.
This study examines how the matching rules adopted by transactional B2B platforms affect B2B trade. Based on bargaining theory, we model the bargaining behavior of buyers and sellers under four platform matching rules and analyze the resulting equilibria in different competitive scenarios. Comparing these equilibria shows how the choice of matching rule affects B2B transactions in each scenario. The results indicate that transaction prices on a B2B platform are determined by the platform's matching rule; that the rule's effect on the platform's gross merchandise volume (GMV) is closely tied to the commission rate the platform charges; and that its effect on buyers' and sellers' profits varies with the buyers' competitive situation and the substitutability of the final products. Moreover, the strength of these effects on prices, GMV, and profits differs across competitive scenarios.
102.
Modular product design both satisfies customers' individualized requirements and effectively reduces the production costs of new and remanufactured products. Taking modular design into account, this paper studies a manufacturer-led closed-loop remanufacturing supply chain and analyzes the product modularity level, retail price, collection price, and manufacturer profit under two collection modes, in-house collection and outsourced collection, to guide the manufacturer's choice between them. The results show that the manufacturer's choice of collection mode depends not only on the cost parameters of modular design but also on the size of the production-cost savings that modular design brings to new and remanufactured products.
103.
陈俊 李娅 张芥 《应用科学学报》2020, 38(3): 488-495
This paper proposes a mathematical model of virtual-machine dynamic energy consumption that distinguishes compute-intensive from I/O-intensive workloads. Incorporating device operating-state parameters, the model computes power from the virtual machine's CPU utilization and CPU frequency in the compute-intensive regime, and from the total bytes read and written to disk and memory in the I/O-intensive regime; integrating power over time yields the data center's energy consumption. Compared with conventional methods, this approach refines the measurement granularity, and in node energy tests running Wordcount and Sort workloads the average energy error was 0.0625. The experimental results show that the finer granularity preserves the same level of measurement accuracy as conventional methods.
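As a rough numerical illustration of the two-regime power model described above, the following is a minimal Python sketch. All coefficients (idle power, CPU and I/O weights) and the workload trace are assumptions made for illustration, not the parameters fitted in the paper.

```python
import numpy as np

# Hypothetical coefficients; the paper fits its own model parameters.
P_IDLE = 60.0      # baseline node power in watts (assumed)
A_CPU = 45.0       # weight on utilization * normalized frequency (assumed)
A_DISK = 2.0e-8    # watts per byte/s of disk traffic (assumed)
A_MEM = 5.0e-9     # watts per byte/s of memory traffic (assumed)

def vm_power(cpu_util, cpu_freq_ghz, disk_bps, mem_bps, compute_intensive):
    """Instantaneous VM power under a two-regime model (illustrative only)."""
    if compute_intensive:
        # Compute-intensive regime: power driven by CPU utilization and frequency.
        return P_IDLE + A_CPU * cpu_util * (cpu_freq_ghz / 3.0)
    # I/O-intensive regime: power driven by disk and memory read/write volume.
    return P_IDLE + A_DISK * disk_bps + A_MEM * mem_bps

# Sample power every 10 s over a 10-minute compute-bound phase, then
# integrate (rectangle rule) to obtain energy, as the abstract describes.
dt, t = 10.0, np.arange(0, 600, 10.0)
power = np.array([vm_power(0.8, 2.6, 0.0, 0.0, True) for _ in t])
energy_joules = float(np.sum(power * dt))
print(f"estimated energy over the trace: {energy_joules / 1e3:.1f} kJ")
```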
104.
Text detection in natural scenes is easily affected by illumination, complex backgrounds, multilingual text, fonts, and character size. This paper proposes a natural-scene text detection algorithm that combines the Itti visual attention model with multi-scale maximally stable extremal regions (MSER). First, an improved Itti visual attention model extracts text feature maps, and different fusion strategies yield text saliency maps at each scale. These are then combined with multi-scale MSER regions to obtain three kinds of text candidate regions. Candidate regions are merged into text lines according to geometric rules relating characters to the generated text boxes, and finally a random forest classifier removes non-text regions to obtain the final text regions. Experimental results show that the method achieves high precision and a degree of robustness for text detection in natural-scene images.
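The MSER candidate-extraction and geometric-filtering stage can be sketched with OpenCV as below. This is only a schematic fragment under assumed thresholds; the improved Itti saliency maps, the multi-scale fusion, the text-line merging, and the random forest classifier from the paper are not reproduced here.

```python
import cv2

def candidate_text_boxes(image_path, min_area=60, max_aspect=8.0):
    """Extract MSER regions and keep boxes with plausible text geometry."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    mser = cv2.MSER_create()            # default parameters; the paper runs MSER at multiple scales
    regions, _ = mser.detectRegions(gray)

    boxes = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts)
        aspect = max(w, h) / max(1, min(w, h))
        # Simple geometric rules standing in for the paper's merging heuristics.
        if w * h >= min_area and aspect <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes

# A trained classifier (the paper uses a random forest on region features)
# would then prune non-text boxes before grouping the survivors into text lines.
```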
105.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: "How can ensemble studies be designed so that they probe uncertainty in desired ways?" This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by "plugging and playing" with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they could both be used in a complementary manner to explore ways of designing better MMEs.
106.
In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey; and we carry out a forecasting horse race in which predictions from various models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine in our experiments includes both simple benchmark linear econometric models and dynamic factor models that are estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage and selection operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises when predicting GDP growth in emerging market economies.
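A highly simplified sketch of the frequentist end of this idea, a LASSO-shrunk factor regression for one-step-ahead GDP prediction, is shown below. The data are simulated placeholders, and the Bayesian LASSO variants and the uncertainty/surprise factor construction from the paper are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Simulated stand-ins for a large monthly indicator panel and GDP growth.
T, N = 120, 80
panel = rng.standard_normal((T, N))
gdp_growth = 0.5 * panel[:, 0] - 0.3 * panel[:, 1] + 0.5 * rng.standard_normal(T)

# Step 1: extract a handful of latent factors from the indicator panel.
factors = PCA(n_components=5).fit_transform(panel)

# Step 2: LASSO-shrunk predictive regression of next-period GDP growth on the factors.
X, y = factors[:-1], gdp_growth[1:]
model = LassoCV(cv=5).fit(X, y)

# One-step-ahead prediction from the most recent factor observation.
print("prediction:", float(model.predict(factors[-1:])[0]))
```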
107.
This paper presents a new spatial dependence model with a feature-difference adjustment. The model accounts for spatial autocorrelation in both the outcome variables and the residuals. The feature-difference adjustment emphasizes feature changes across neighboring units while suppressing unobserved covariates that are shared within the same neighborhood. The prediction at a given unit incorporates components that depend on the differences between the values of its main features and those of its neighboring units. In contrast to conventional spatial regression models, our model does not require a comprehensive list of global covariates to estimate the outcome variable at the unit, as common macro-level covariates are differenced away in the regression analysis. Using real estate market data from Hong Kong, we apply Gibbs sampling to obtain the posterior distribution of each model parameter. The results of our empirical analysis confirm that the feature-difference adjustment, combined with spatial error autocorrelation, produces better out-of-sample prediction performance than conventional spatial dependence models. In addition, the empirical analysis identifies the components with the most significant contributions.
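To make the feature-difference idea concrete, the sketch below builds regressors from the gap between a unit's own features and its neighbors' weighted average, using a row-normalized weight matrix W. This is a schematic least-squares illustration on simulated data; the paper estimates the full model, including the spatial error autocorrelation, by Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3

# Simulated features and a random symmetric neighbor structure.
X = rng.standard_normal((n, k))
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # row-normalized weights

# Feature-difference terms: own features minus the neighborhood average;
# covariates common to a neighborhood are differenced away here.
X_diff = X - W @ X

# Design matrix with an intercept, feature levels, and feature differences.
Z = np.hstack([np.ones((n, 1)), X, X_diff])
beta_true = rng.standard_normal(Z.shape[1])
y = Z @ beta_true + 0.5 * rng.standard_normal(n)

beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.round(beta_hat - beta_true, 2))   # recovery check on the simulated data
```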
108.
We consider finite state-space non-homogeneous hidden Markov models for forecasting univariate time series. Given a set of predictors, the time series are modeled via predictive regressions with state-dependent coefficients and time-varying transition probabilities that depend on the predictors through a logistic/multinomial function. In a hidden Markov setting, inference for the logistic regression coefficients becomes complicated, and in some cases impossible, due to convergence issues. In this paper, we address this problem using the recently proposed Pólya-Gamma latent variable scheme. We also allow for model uncertainty regarding the predictors that affect the series both linearly (in the mean) and non-linearly (in the transition matrix). Predictor selection and inference on the model parameters are based on an automatic Markov chain Monte Carlo scheme with reversible jump steps, so the proposed methodology can be used as a black box for predicting time series. Using simulation experiments, we illustrate the performance of our algorithm in various setups in terms of mixing properties, model selection, and predictive ability. An empirical study on realized volatility data shows that our methodology gives improved forecasts compared to benchmark models.
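The time-varying transition matrix driven by predictors through a multinomial-logistic link can be written compactly. The sketch below only constructs that matrix for a small chain, with arbitrary coefficient values; the Pólya-Gamma augmentation and the reversible jump sampler used for inference in the paper are not attempted here.

```python
import numpy as np

def transition_matrix(z_t, gammas):
    """
    Row-wise multinomial-logistic transition probabilities.
    z_t    : predictor vector at time t, with a leading 1 for the intercept.
    gammas : array of shape (n_states, n_states, len(z_t)); the last destination
             state in each row is the reference category (coefficients fixed at 0).
    """
    n_states = gammas.shape[0]
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        scores = gammas[i] @ z_t                # linear predictor per destination state
        exps = np.exp(scores - scores.max())    # numerically stable softmax
        P[i] = exps / exps.sum()
    return P

# Two hidden states, one predictor plus an intercept; illustrative coefficients only.
gammas = np.array([[[0.8, 1.5], [0.0, 0.0]],
                   [[-0.4, -1.0], [0.0, 0.0]]])
z_t = np.array([1.0, 0.3])                      # [intercept, predictor value]
print(transition_matrix(z_t, gammas))           # each row sums to one
```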
109.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples, and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher than the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
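One way to read the weighting idea is as an inverse-loss combination in which more recent losses in the estimation window carry more weight. The sketch below illustrates that generic logic on simulated forecast losses with an assumed decay factor; it is not the specific schemes or loss functions developed in the paper.

```python
import numpy as np

def combination_weights(losses, decay=0.95):
    """
    losses : array of shape (T, M) with per-period losses for M candidate forecasts.
    decay  : exponential factor that emphasises recent losses, in the spirit of
             exploiting volatility persistence (value assumed for illustration).
    """
    T, _ = losses.shape
    time_w = decay ** np.arange(T - 1, -1, -1)          # newest period gets weight 1
    avg_loss = (time_w[:, None] * losses).sum(axis=0) / time_w.sum()
    inv = 1.0 / avg_loss
    return inv / inv.sum()                              # combination weights sum to one

rng = np.random.default_rng(2)
losses = np.abs(rng.standard_normal((250, 4))) * np.array([1.0, 1.2, 0.9, 1.5])
w = combination_weights(losses)
print(np.round(w, 3))                   # lower-loss candidate models get larger weights

candidate_forecasts = np.array([0.8, 1.1, 0.9, 1.3])    # e.g., four volatility forecasts
combined = candidate_forecasts @ w
```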
110.
We use dynamic factors and neural network models to identify current and past states (rather than future states) of the US business cycle. In the first step, we reduce noise in the data by using a moving average filter. Dynamic factors are then extracted from a large-scale data set consisting of more than 100 variables. In the last step, these dynamic factors are fed into the neural network model to predict business cycle regimes. We show that our proposed method follows US business cycle regimes quite accurately in-sample and out-of-sample without taking historical data availability into account. Our results also indicate that noise reduction is an important step for business cycle prediction. Furthermore, using pseudo real-time and vintage data, we show that our neural network model identifies turning points quite accurately and very quickly in real time.
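The three steps in the abstract (noise reduction, factor extraction, and a neural network regime classifier) can be mimicked with standard tools. The sketch below is a generic scikit-learn illustration on simulated data with placeholder regime labels; it is not the authors' specification.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Simulated stand-ins for a large monthly panel and recession/expansion labels.
T, N = 360, 120
panel = pd.DataFrame(rng.standard_normal((T, N)))
regime = (rng.random(T) < 0.15).astype(int)          # placeholder labels

# Step 1: reduce noise with a moving-average filter.
smoothed = panel.rolling(window=3, min_periods=1).mean()

# Step 2: extract factors (plain principal components here).
factors = PCA(n_components=8).fit_transform(smoothed)

# Step 3: classify current/past regimes with a neural network.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(factors[:300], regime[:300])
print("held-out accuracy:", clf.score(factors[300:], regime[300:]))
```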